1. Configure Network Teaming

In this lab, you create a network team interface called team0 with a static IP address of 192.168.1.100/24, built on two port interfaces: eth1 and eth2. It will be a fault-tolerant, active-backup interface.

  1. Reset the server1.example.com system.

  2. Become the root user.

  3. Display the current state of the existing network interfaces. eth1 and eth2 will serve as the ports for the teamed interface.

    [root@server1 ~]# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 52:54:00:00:XX:0b brd ff:ff:ff:ff:ff:ff
    4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 00:10:18:2b:98:85 brd ff:ff:ff:ff:ff:ff
    6: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 64:31:50:18:80:8f brd ff:ff:ff:ff:ff:ff
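    • If you want an additional confirmation that eth1 and eth2 are free to use, you can also list the devices that NetworkManager manages. This optional check is not part of the lab steps and its output will vary by system:

      [root@server1 ~]# nmcli dev status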
  4. Create an active-backup team interface called team0 and assign its IPv4 settings.

    1. Create the team0 connection.

      [root@server1 ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
      Connection 'team0' (5dc435ac-e4ac-403a-8e8f-163b163bf49b) successfully added.
    2. Define the IPv4 settings for team0. Assign it the IP address 192.168.1.100/24 and set the IPv4 addressing method to manual (static).

      [root@server1 ~]# nmcli con mod team0 ipv4.addresses '192.168.1.100/24'
      [root@server1 ~]# nmcli con mod team0 ipv4.method manual
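      • Optionally, read back the settings NetworkManager stored for team0. The following is only a quick sanity check, not part of the lab; the -f option restricts the output to the named fields:

        [root@server1 ~]# nmcli -f team.config,ipv4.method,ipv4.addresses con show team0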
  5. Assign eth1 and eth2 as port interfaces for team0.

    [root@server1 ~]# nmcli con add type team-slave con-name team0-port1 ifname eth1 master team0
    Connection 'team0-port1' (f5664c4e-1dba-43f8-8427-35aee0594ed3) successfully added.
    [root@server1 ~]# nmcli con add type team-slave con-name team0-port2 ifname eth2 master team0
    Connection 'team0-port2' (174e4402-b169-47d1-859f-9a4b3f30000f) successfully added.
  6. Check the current state of the teamed interfaces on the system.

    [root@server1 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth1
    • Notice how the teamed interface immediately becomes active with eth1 as the active port.
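    • teamdctl queries the teamd daemon; the teamnl utility talks to the kernel team driver directly. As an optional alternative view of the same information, you could list the team's ports with:

      [root@server1 ~]# teamnl team0 ports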

  7. Open another terminal and ping the local network gateway through the team0 interface. Let this ping continue to run as you perform the following steps.

    [student@server1 ~]$ ping -I team0 192.168.1.254
    PING 192.168.1.254 (192.168.1.254) from 192.168.1.100 team0: 56(84) bytes of data.
    64 bytes from 192.168.1.254: icmp_seq=10 ttl=64 time=1.08 ms
    64 bytes from 192.168.1.254: icmp_seq=11 ttl=64 time=0.789 ms
    64 bytes from 192.168.1.254: icmp_seq=12 ttl=64 time=0.906 ms
    ...Output omitted...
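    • If you would like to watch the failover happen live during the next steps, you could run something like the following in a third terminal; this is only an optional aid, and watch simply reruns teamdctl every second:

      [root@server1 ~]# watch -n1 teamdctl team0 state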
  8. Go back to the other root terminal. Bring the active port of the teamed interface down and observe the impact on team0.

    [root@server1 ~]# nmcli dev dis eth1
    [root@server1 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth2
    • The ping continues to work because team0 switched over to the remaining port.
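    • You can also ask the kernel team driver directly which port is active. As an optional check, the activeport option reports the kernel interface index (not the name) of the active port:

      [root@server1 ~]# teamnl team0 getoption activeport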

  9. Bring the original port interface back up and bring the other port interface down.

    [root@server1 ~]# nmcli con up team0-port1
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/10)
    [root@server1 ~]# nmcli dev dis eth2
    [root@server1 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth1
    • Again, note how the ping continues to reach the local gateway.

  10. Bring the downed port interface back up and observe how it affects the teamed interface, team0.

    [root@server1 ~]# nmcli con up team0-port2
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/11)
    [root@server1 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth1
    • The ping continues to contact the local network gateway, and the currently active port interface does not change when both port interfaces are available.
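    • With the activebackup runner you can also choose the active port by hand. The command below is only a sketch: it assumes eth2 still has kernel interface index 6, as shown in the ip link output at the start of this lab, and the index on your system may differ:

      [root@server1 ~]# teamnl team0 setoption activeport 6    # 6 = eth2's ifindex in this example; check yours with ip link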

2. Manage Network Teaming

In this lab, you manage your network team interface. You first deactivate it and change its runner to roundrobin, then reactivate it, and finally change the runner back to activebackup. To get information about the team interface, you use teamdctl and teamnl.

Before you begin, complete the previous exercise, Configure Network Teaming. This exercise uses the teamed interfaces you created there.
  1. Log in to your server system and become root.

  2. Get the initial state of the teamed interfaces on the system.

    [root@server1 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth1
  3. Examine the configuration files created by NetworkManager that apply to the team interface and its ports.

    1. Display the file for the team interface and note how it defines the runner to be used and the IPv4 network settings for the interface.

      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
      DEVICE=team0
      TEAM_CONFIG="{\"runner\": {\"name\": \"activebackup\"}}"
      DEVICETYPE=Team
      BOOTPROTO=none
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0
      UUID=5dc435ac-e4ac-403a-8e8f-163b163bf49b
      ONBOOT=yes
      IPADDR0=192.168.1.100
      PREFIX0=24
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
    2. Display the configuration files for the port interfaces. Take special notice of the values of the TEAM_MASTER and DEVICETYPE shell variables.

      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0-port1
      BOOTPROTO=dhcp
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0-port1
      UUID=f5664c4e-1dba-43f8-8427-35aee0594ed3
      DEVICE=eth1
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort
      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0-port2
      BOOTPROTO=dhcp
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0-port2
      UUID=174e4402-b169-47d1-859f-9a4b3f30000f
      DEVICE=eth2
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort
  4. Bring the team0 interface down and edit the configuration file to use the roundrobin runner.

    1. Bring the team0 interface down.

      [root@server1 ~]# nmcli dev dis team0
    2. Edit the configuration file for team0 and adjust it to use the roundrobin runner.

      [root@server1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-team0
      [root@server1 ~]# grep runner /etc/sysconfig/network-scripts/ifcfg-team0
      TEAM_CONFIG="{\"runner\": {\"name\": \"roundrobin\"}}"
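      • If you prefer a non-interactive edit, a simple substitution with sed would make the same change. This sketch assumes the string activebackup appears only in the TEAM_CONFIG line of the file:

        [root@server1 ~]# sed -i 's/activebackup/roundrobin/' /etc/sysconfig/network-scripts/ifcfg-team0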
    3. Use nmcli to make NetworkManager reload the updated configuration.

      [root@server1 ~]# nmcli con load /etc/sysconfig/network-scripts/ifcfg-team0
  5. Bring the team0 interface back up.

    1. Tell NetworkManager to activate team0.

      [root@server1 ~]# nmcli con up team0
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/12)
    2. Check the current state of team0.

      [root@server1 ~]# teamdctl team0 state
      setup:
        runner: roundrobin
    3. In another terminal window, ping the local network gateway through the team0 interface.

      [root@server1 ~]# ping -I team0 192.168.1.254
      PING 192.168.1.254 (192.168.1.254) from 192.168.1.100 team0: 56(84) bytes of data.
      From 192.168.1.100 icmp_seq=1 Destination Host Unreachable
      From 192.168.1.100 icmp_seq=2 Destination Host Unreachable
      From 192.168.1.100 icmp_seq=3 Destination Host Unreachable
      ...Output omitted...
      • The ping command fails to contact the gateway because the teamed interface does not have any active ports.
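      • You can confirm that only the team0 connection is active and that neither port connection has been brought up yet. This optional check simply lists the active connections:

        [root@server1 ~]# nmcli con show --active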

  6. Activate one of the port interfaces for team0.

    1. Use nmcli to activate the team0-port1 interface.

      [root@server1 ~]# nmcli con up team0-port1
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/13)
    2. Display the state of the teamed interface.

      [root@server1 ~]# teamdctl team0 state
      setup:
        runner: roundrobin
      ports:
        eth1
          link watches:
            link summary: up
            instance[link_watch_0]:
              name: ethtool
              link: up
    3. Ping the local network gateway through the team0 interface. The ping should now be able to reach the gateway.

      [root@server1 ~]# ping -I team0 192.168.1.254
      PING 192.168.1.254 (192.168.1.254) from 192.168.1.100 team0: 56(84) bytes of data.
      64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.516 ms
      64 bytes from 192.168.1.254: icmp_seq=2 ttl=64 time=0.703 ms
      64 bytes from 192.168.1.254: icmp_seq=3 ttl=64 time=0.422 ms
  7. Bring up the other team port for team0.

    1. Use nmcli to activate the team0-port2 interface.

      [root@server1 ~]# nmcli con up team0-port2
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/14)
    2. Display the updated state of the teamed interface.

      [root@server1 ~]# teamdctl team0 state
      setup:
        runner: roundrobin
      ports:
        eth1
          link watches:
            link summary: up
            instance[link_watch_0]:
              name: ethtool
              link: up
        eth2
          link watches:
            link summary: up
            instance[link_watch_0]:
              name: ethtool
              link: up
  8. Use teamdctl to display the configuration for team0.

    [root@server1 ~]# teamdctl team0 config dump
    {
        "device": "team0",
        "ports": {
            "eth1": {
                "link_watch": {
                    "name": "ethtool"
                }
            },
            "eth2": {
                "link_watch": {
                    "name": "ethtool"
                }
            }
        },
        "runner": {
            "name": "roundrobin"
        }
    }
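    • teamdctl can also dump the configuration that teamd is actually running with, which may include defaults that are not present in the stored configuration. The exact output varies, so treat this as an optional extra:

      [root@server1 ~]# teamdctl team0 config dump actual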
  9. Use the teamnl command to display the tunable options for team0.

    [root@server1 ~]# teamnl team0 options
     queue_id (port:eth2) 0
     priority (port:eth2) 0
     user_linkup_enabled (port:eth2) false
     user_linkup (port:eth2) true
     enabled (port:eth2) true
     queue_id (port:eth1) 0
     priority (port:eth1) 0
     user_linkup_enabled (port:eth1) false
     user_linkup (port:eth1) true
     enabled (port:eth1) true
     mcast_rejoin_interval 0
     mcast_rejoin_count 0
     notify_peers_interval 0
     notify_peers_count 0
     mode roundrobin
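    • A single option can be read back with getoption. For example, as an optional check, confirm the runner mode that the kernel reports:

      [root@server1 ~]# teamnl team0 getoption mode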
  10. Modify the team0 interface so it uses the activebackup runner instead of roundrobin.

    1. Bring the team0 interface down. The interface can only be modified after it is brought down.

      [root@server1 ~]# nmcli dev dis team0
    2. Use nmcli to tune the teamed interface to use the activebackup runner.

      [root@server1 ~]# nmcli con mod team0 team.config '{"runner": {"name": "activebackup"}}'
    3. Examine the changes made to the interface’s configuration file.

      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
      DEVICE=team0
      TEAM_CONFIG="{\"runner\": {\"name\": \"activebackup\"}}"
      DEVICETYPE=Team
      BOOTPROTO=none
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0
      UUID=5dc435ac-e4ac-403a-8e8f-163b163bf49b
      ONBOOT=yes
      IPADDR0=192.168.1.100
      PREFIX0=24
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
  11. Reactivate the teamed interface and both of its port interfaces.

    1. Use nmcli to activate the teamed interface.

      [root@server1 ~]# nmcli con up team0
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/15)
    2. Display its initial state. Note that the port interfaces did not get activated.

      [root@server1 ~]# teamdctl team0 state
      setup:
        runner: activebackup
      runner:
        active port:
    3. Activate the two port interfaces and display the resulting teamed interface state.

      [root@server1 ~]# nmcli con up team0-port1
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/16)
      [root@server1 ~]# nmcli con up team0-port2
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/17)
      [root@server1 ~]# teamdctl team0 state
      setup:
        runner: activebackup
      ports:
        eth1
          link watches:
            link summary: up
            instance[link_watch_0]:
              name: ethtool
              link: up
        eth2
          link watches:
            link summary: up
            instance[link_watch_0]:
              name: ethtool
              link: up
      runner:
        active port: eth1
    4. Restart the ping and make sure it reaches the local gateway again now that the port interfaces are back up.

      [root@server1 ~]# ping -I team0 192.168.1.254
      PING 192.168.1.254 (192.168.1.254) from 192.168.1.100 team0: 56(84) bytes of data.
      64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.516 ms
      64 bytes from 192.168.1.254: icmp_seq=2 ttl=64 time=0.703 ms
      64 bytes from 192.168.1.254: icmp_seq=3 ttl=64 time=0.422 ms
  12. Use the teamnl command to display the options available to an activebackup team device.

    [root@server1 ~]# teamnl team0 options
     queue_id (port:eth2) 0
     priority (port:eth2) 0
     user_linkup_enabled (port:eth2) false
     user_linkup (port:eth2) true
     enabled (port:eth2) false
     queue_id (port:eth1) 0
     priority (port:eth1) 0
     user_linkup_enabled (port:eth1) false
     user_linkup (port:eth1) true
     enabled (port:eth1) true
     activeport 3
     mcast_rejoin_interval 0
     mcast_rejoin_count 1
     notify_peers_interval 0
     notify_peers_count 1
     mode activebackup
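    • The activeport value is the kernel interface index of the currently active port, not a device name. If you want to map an index back to a device, you can list the interfaces with their indexes; the index on your system may differ from the example output above:

      [root@server1 ~]# ip -o link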

3. Configure Software Bridges

In this lab, you create a network bridge called br1. This bridge is attached to the eth1 network interface and has a static IP address of 192.168.1.100/24.

  1. Reset the server1.example.com system.

  2. Become the root user.

  3. Define a software bridge called br1 and assign it a static IPv4 address of 192.168.1.100/24.

    1. Use nmcli to create the software bridge.

      [root@server1 ~]# nmcli con add type bridge con-name br1 ifname br1
      Connection 'br1' (d9d56520-574a-4e2a-9f43-b593a1bdff61) successfully added.
    2. Configure the IPv4 addressing for the interface.

      [root@server1 ~]# nmcli con mod br1 ipv4.addresses 192.168.1.100/24
      [root@server1 ~]# nmcli con mod br1 ipv4.method manual
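      • As with the team interface, you can optionally read back the stored settings for the bridge. This quick check is not part of the lab; the -f option limits the output to the named fields:

        [root@server1 ~]# nmcli -f bridge.stp,ipv4.method,ipv4.addresses con show br1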
  4. Attach the eth1 interface to the br1 software bridge.

    [root@server1 ~]# nmcli con add type bridge-slave con-name br1-port0 ifname eth1 master br1
    Connection 'br1-port0' (5f5e7ea8-b507-4c10-a61f-779369cf82ee) successfully added.
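    • Depending on your system, NetworkManager may activate the new connections automatically. If it does not, you could bring them up by hand; this extra step is only needed when the bridge does not come up on its own:

      [root@server1 ~]# nmcli con up br1
      [root@server1 ~]# nmcli con up br1-port0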
  5. Inspect the configuration files that were created for the software bridge by NetworkManager. Look for variables that connect the two interfaces.

    [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br1
    DEVICE=br1
    STP=yes
    TYPE=Bridge
    BOOTPROTO=none
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    NAME=br1
    UUID=d9d56520-574a-4e2a-9f43-b593a1bdff61
    ONBOOT=yes
    IPADDR0=192.168.1.100
    PREFIX0=24
    BRIDGING_OPTS=priority=32768
    IPV6_PEERDNS=yes
    IPV6_PEERROUTES=yes
    [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br1-port0
    TYPE=Ethernet
    NAME=br1-port0
    UUID=5f5e7ea8-b507-4c10-a61f-779369cf82ee
    DEVICE=eth1
    ONBOOT=yes
    BRIDGE=br1
  6. Display the link status of the network interfaces.

    [root@server1 ~]# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 52:54:00:00:XX:0b brd ff:ff:ff:ff:ff:ff
    4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UP mode DEFAULT qlen 1000
        link/ether 00:10:18:2b:98:85 brd ff:ff:ff:ff:ff:ff
    6: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 64:31:50:18:80:8f brd ff:ff:ff:ff:ff:ff
    7: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
        link/ether 00:10:18:2b:98:85 brd ff:ff:ff:ff:ff:ff
  7. Use brctl to display information about software bridges on the system.

    [root@server1 ~]# brctl show
    bridge name     bridge id               STP enabled     interfaces
    br1             8000.52540001050b       yes             eth1
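    • brctl is provided by the bridge-utils package. On systems where it is not installed, the same information should be available from the iproute2 bridge command, for example:

      [root@server1 ~]# bridge link show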
  8. Ping the local network gateway using the software bridge.

    [root@server1 ~]# ping -I br1 192.168.1.254
    PING 192.168.1.254 (192.168.1.254) from 192.168.1.100 br1: 56(84) bytes of data.
    64 bytes from 192.168.1.254: icmp_seq=10 ttl=64 time=0.520 ms
    64 bytes from 192.168.1.254: icmp_seq=11 ttl=64 time=0.470 ms
    64 bytes from 192.168.1.254: icmp_seq=12 ttl=64 time=0.339 ms
    64 bytes from 192.168.1.254: icmp_seq=13 ttl=64 time=0.294 ms
    ...Output omitted...

4. Configure a Bridge on a Network Team

In this lab, you create a bridge that is connected to a network team interface.

  1. Reset the server1.example.com system.

  2. Become the root user.

  3. Confirm that eth1 and eth2 are available for use. Display the current state of the existing network interfaces. eth1 and eth2 will serve as the port interfaces that are teamed into a single interface.

    [root@server1 ~]# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 52:54:00:00:XX:0b brd ff:ff:ff:ff:ff:ff
    4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 00:10:18:2b:98:85 brd ff:ff:ff:ff:ff:ff
    6: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
        link/ether 64:31:50:18:80:8f brd ff:ff:ff:ff:ff:ff
  4. Create an activebackup network team interface called team0.

    1. Create the network team interface and call it team0.

      [root@server1 ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
      Connection 'team0' (2f608473-ff8b-4a0d-b250-79567e3f4a13) successfully added.
    2. Assign eth1 and eth2 as network port interfaces for team0.

      [root@server1 ~]# nmcli con add type team-slave con-name team0-port1 ifname eth1 master team0
      Connection 'team0-port1' (3367d0ef-deb5-444b-bc01-0ed3825615a9) successfully added.
      [root@server1 ~]# nmcli con add type team-slave con-name team0-port2 ifname eth2 master team0
      Connection 'team0-port2' (4951d25b-a454-4735-8c7b-a51b983df56b) successfully added.
  5. Confirm the team0 interface is up and working properly.

    [root@server1 ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth1
  6. Disconnect the team0 device, then stop and disable the NetworkManager service.

    [root@server1 ~]# nmcli dev dis team0
    [root@server1 ~]# systemctl stop NetworkManager
    [root@server1 ~]# systemctl disable NetworkManager
    rm '/etc/systemd/system/multi-user.target.wants/NetworkManager.service'
    rm '/etc/systemd/system/dbus-org.freedesktop.NetworkManager.service'
    rm '/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service'
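    • With NetworkManager stopped and disabled, the interfaces will be handled by the legacy network service in the remaining steps. Before continuing, you can optionally confirm that this service exists on your system:

      [root@server1 ~]# systemctl status network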
  7. Manipulate the interface configuration files so that the team0 interface is attached to a software bridge called brteam0 with a static IP address of 192.168.1.100/24.

    1. List the original interface configuration files that were created by NetworkManager:

      [root@server1 ~]# cd /etc/sysconfig/network-scripts/
      [root@server1 network-scripts]# ls -1 ifcfg-*
      ifcfg-eth0
      ifcfg-lo
      ifcfg-team0
      ifcfg-team0-port1
      ifcfg-team0-port2
    2. Edit ifcfg-team0 to add a BRIDGE variable that attaches it to the bridge you are about to create, brteam0.

      [root@server1 network-scripts]# echo "BRIDGE=brteam0" >> ifcfg-team0
      [root@server1 network-scripts]# tail ifcfg-team0
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0
      UUID=2f608473-ff8b-4a0d-b250-79567e3f4a13
      ONBOOT=yes
      BRIDGE=brteam0
    3. Remove the IP configuration information from the configuration file for the first team port interface. Replace all text in ifcfg-team0-port1 with the text below.

      [root@server1 network-scripts]# vim ifcfg-team0-port1
      [root@server1 network-scripts]# cat ifcfg-team0-port1
      NAME=team0-port1
      DEVICE=eth1
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort
    4. Remove the IP configuration information from the configuration file for the second team port interface. Replace all text in ifcfg-team0-port2 with the text below.

      [root@server1 network-scripts]# vim ifcfg-team0-port2
      [root@server1 network-scripts]# cat ifcfg-team0-port2
      NAME=team0-port2
      DEVICE=eth2
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort
    5. Create a new interface configuration file for the bridge, ifcfg-brteam0, and define the IP configuration information there, using the text below.

      [root@server1 network-scripts]# vim ifcfg-brteam0
      [root@server1 network-scripts]# cat ifcfg-brteam0
      DEVICE=brteam0
      ONBOOT=yes
      TYPE=Bridge
      IPADDR0=192.168.1.100
      PREFIX0=24
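      • Before restarting the network, you may want to verify that the three pieces reference each other correctly: ifcfg-team0 points at the bridge, the port files point at team0, and only the bridge carries the IP address. One way to do this, as an optional check, is a quick grep across the files:

        [root@server1 network-scripts]# grep -E 'BRIDGE|TEAM_MASTER|DEVICETYPE|TYPE|IPADDR' ifcfg-team0 ifcfg-team0-port? ifcfg-brteam0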
  8. Restart the network service to start the new bridge, brteam0, and reactivate the team0 interface.

    [root@server1 network-scripts]# systemctl restart network

    You may see the error "Job for network.service failed. See 'systemctl status network.service' and 'journalctl -xn' for details." This should not occur after a clean reboot, and you can ignore it for now.

  9. Test the network configuration.

    1. Use ping to see if the local gateway, 192.168.1.254, can be reached through the brteam0 interface.

      [root@server1 ~]# ping -I brteam0 192.168.1.254
      PING 192.168.1.254 (192.168.1.254) from 192.168.1.100 brteam0: 56(84) bytes of data.
      64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.172 ms
      64 bytes from 192.168.1.254: icmp_seq=2 ttl=64 time=0.091 ms
      64 bytes from 192.168.1.254: icmp_seq=3 ttl=64 time=0.052 ms
      ... Output omitted ...
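      • To confirm the final topology, you can optionally check that team0 is attached to the bridge and that both Ethernet ports are still part of the team:

        [root@server1 ~]# brctl show brteam0
        [root@server1 ~]# teamdctl team0 state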

server1.example.com now has an activebackup team interface called team0, built on the port interfaces eth1 and eth2. team0 is attached to a bridge called brteam0, and the bridge has a static IP address of 192.168.1.100/24.